    Asymmetric binary covering codes

    An asymmetric binary covering code of length n and radius R is a subset C of the n-cube Q_n such that every vector x in Q_n can be obtained from some vector c in C by changing at most R 1's of c to 0's, where R is as small as possible. K^+(n,R) is defined as the smallest size of such a code. We show that K^+(n,R) is of order 2^n/n^R for constant R, using an asymmetric sphere-covering bound and probabilistic methods. We show that K^+(n,n-R') = R'+1 for constant coradius R' if and only if n >= R'(R'+1)/2. These two results are extended to near-constant R and R', respectively. Various bounds on K^+ are given in terms of the total number of 0's or 1's in a minimal code. The dimension of a minimal asymmetric linear binary code ([n,R]^+ code) is determined to be max(0, n-R). We conclude by discussing open problems and techniques to compute explicit values of K^+, giving a table of best-known bounds.
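    As a concrete illustration of the definition (ours, not from the paper), the sketch below computes K^+(n,R) by exhaustive search for very small n; the function names are our own.

    ```python
    from itertools import combinations

    def covers(c: int, x: int, R: int) -> bool:
        # c covers x iff x arises from c by changing at most R 1's of c to 0's:
        # the support of x must lie inside that of c, and they differ in <= R bits.
        return (c & x) == x and bin(c ^ x).count("1") <= R

    def k_plus(n: int, R: int) -> int:
        # Smallest asymmetric binary covering code of length n and radius R.
        # Exhaustive search over all code sizes; feasible only for tiny n.
        cube = list(range(2 ** n))
        for size in range(1, 2 ** n + 1):
            for code in combinations(cube, size):
                if all(any(covers(c, x, R) for c in code) for x in cube):
                    return size

    print(k_plus(3, 1))  # tiny sanity check of the definition
    ```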

    Impact of Interoperability on CAD-IP Reuse: An Academic Viewpoint

    The mind-boggling complexity of EDA tools necessitates reuse of intellectual property in any large-scale commercial or academic operation. However, due to the nature of software, a tool component remains an ill-defined concept, in contrast to a hardware component (core) with its formally specified functions and interfaces. Furthermore, EDA tasks often evolve rapidly to fit new manufacturing contexts or new design approaches created by circuit designers; this leads to moving targets for CAD software developers. Yet, it is uneconomical to write off tool reuse as simply an endemic “software problem”. Our main message is that CAD tools should be planned and designed in terms of reusable components and glue code. This implies that industrial and academic research should focus on (1) formulating practical tool components in terms of common interfaces, (2) implementing such components, and (3) performing detailed evaluations of such components. While this is reminiscent of hardware reuse, most existing EDA tools are designed as stand-alone programs and interface through files.

    Spectral partitioning with multiple eigenvectors

    The graph partitioning problem is to divide the vertices of a graph into disjoint clusters so as to minimize the total cost of the edges cut by the clusters. A spectral partitioning heuristic uses the graph's eigenvectors to construct a geometric representation of the graph (e.g., a linear ordering) which is subsequently partitioned. Our main result shows that when all the eigenvectors are used, graph partitioning reduces to a new vector partitioning problem. This result implies that as many eigenvectors as are practically possible should be used to construct a solution. This philosophy is in contrast to that of the widely used spectral bipartitioning (SB) heuristic (which uses only a single eigenvector) and several previous multi-way partitioning heuristics [8, 11, 17, 27, 38] (which use k eigenvectors to construct k-way partitionings). Our result motivates a simple ordering heuristic that is a multiple-eigenvector extension of SB. This heuristic not only significantly outperforms recursive SB, but can also yield excellent multi-way VLSI circuit partitionings compared to [1, 11]. Our experiments suggest that the vector partitioning perspective opens the door to new and effective partitioning heuristics. The present paper updates and improves a preliminary version of this work [5].
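    For reference (our sketch, not the authors' code), the single-eigenvector SB baseline that the paper extends orders vertices by the Fiedler vector of the graph Laplacian and splits the ordering at the median:

    ```python
    import numpy as np

    def spectral_bipartition(adj):
        # Classic spectral bipartitioning: order vertices by the Fiedler vector
        # (eigenvector of the second-smallest Laplacian eigenvalue), split in half.
        degrees = adj.sum(axis=1)
        laplacian = np.diag(degrees) - adj
        _, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
        fiedler = eigvecs[:, 1]
        order = np.argsort(fiedler)              # the 1-D "linear ordering"
        half = len(order) // 2
        return set(order[:half]), set(order[half:])

    # Toy graph: two triangles joined by a single edge; the split separates them.
    A = np.zeros((6, 6))
    for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[u, v] = A[v, u] = 1
    print(spectral_bipartition(A))
    ```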

    Assessment of Reinforcement Learning for Macro Placement

    We provide an open, transparent implementation and assessment of Google Brain's deep reinforcement learning approach to macro placement and its Circuit Training (CT) implementation in GitHub. We implement in open source key "blackbox" elements of CT, and clarify discrepancies between CT and the Nature paper. New testcases on open enablements are developed and released. We assess CT alongside multiple alternative macro placers, with all evaluation flows and related scripts public in GitHub. Our experiments also encompass academic mixed-size placement benchmarks, as well as ablation and stability studies. We comment on the impact of the Nature paper and CT, as well as on directions for future research.

    PROBE3.0: A Systematic Framework for Design-Technology Pathfinding with Improved Design Enablement

    We propose a systematic framework to conduct design-technology pathfinding for power, performance, area and cost (PPAC) in advanced nodes. Our goal is to provide configurable, scalable generation of a process design kit (PDK) and standard-cell library, spanning key scaling boosters (backside PDN and buried power rail), to explore PPAC across given technology and design parameters. We build on PROBE2.0, which addressed only area and cost (AC), to include power and performance (PP) evaluations through automated generation of full design enablements. We also improve the use of artificial designs in the PPAC assessment of technology and design configurations. We generate more realistic artificial designs by applying a machine learning-based parameter tuning flow. We further employ clustering-based, cell width-regularized placements at the core of routability assessment, enabling more realistic placement utilization and improved experimental efficiency. We demonstrate PPAC evaluation across scaling boosters and artificial designs in a predictive technology node.

    Performance Analysis of DNN Inference/Training with Convolution and non-Convolution Operations

    Today's performance analysis frameworks for deep learning accelerators suffer from two significant limitations. First, although modern convolutional neural networks (CNNs) consist of many types of layers other than convolution, especially during training, these frameworks largely focus on convolution layers only. Second, these frameworks are generally targeted towards inference and lack support for training operations. This work proposes a novel performance analysis framework, SimDIT, for general ASIC-based systolic hardware accelerator platforms. The modeling effort of SimDIT comprehensively covers convolution and non-convolution operations of both CNN inference and training on a highly parameterizable hardware substrate. SimDIT is integrated with a backend silicon implementation flow and provides detailed end-to-end performance statistics (i.e., data access cost, cycle counts, energy, and power) for executing CNN inference and training workloads. SimDIT-enabled performance analysis reveals that on a 64x64 processing array, non-convolution operations constitute 59.5% of total runtime for a ResNet-50 training workload. In addition, by optimally distributing available off-chip DRAM bandwidth and on-chip SRAM resources, SimDIT achieves an 18x performance improvement over a generic static resource allocation for ResNet-50 inference.
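    For intuition only (this first-order sketch is ours, not SimDIT's actual model), a compute-bound cycle estimate for a convolution lowered to GEMM on a P x P systolic array can be written as follows; all parameter names are our own.

    ```python
    from math import ceil

    def conv_gemm_cycles(h_out, w_out, c_in, c_out, kh, kw, array_dim=64):
        # First-order cycle count for a conv layer lowered to GEMM (im2col)
        # on an output-stationary array_dim x array_dim systolic array.
        # Ignores memory stalls; counts only compute plus pipeline fill/drain.
        M = h_out * w_out          # output pixels (GEMM rows)
        N = c_out                  # output channels (GEMM columns)
        K = c_in * kh * kw         # reduction dimension
        tiles = ceil(M / array_dim) * ceil(N / array_dim)
        return tiles * (K + 2 * array_dim - 1)  # per-tile reduction + fill/drain

    # e.g., a ResNet-style 3x3 convolution, 56x56x64 -> 56x56x64
    print(conv_gemm_cycles(56, 56, 64, 64, 3, 3))
    ```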

    Effective Iterative Techniques for Fingerprinting Design IP

    Fingerprinting is an approach that assigns a unique and invisible ID to each sold instance of a piece of intellectual property (IP). One of the key advantages fingerprinting-based intellectual property protection (IPP) has over watermarking-based IPP is that it enables tracing of stolen hardware or software. Fingerprinting schemes have been widely and effectively used to achieve this goal; however, their application domain has been restricted to static artifacts, such as image and audio, where distinct copies can be obtained easily. In this paper, we propose the first generic fingerprinting technique that can be applied to an arbitrary synthesis (optimization or decision) or compilation problem and, therefore, to hardware and software IPs. The key problem with design IP fingerprinting is the need to generate a large number of structurally unique but functionally and timing-identical designs. To reduce the cost of generating such distinct copies, we apply iterative optimization in an incremental fashion to solve a fingerprinted instance: we leverage the optimization effort already spent in obtaining previous solutions, yet generate a uniquely fingerprinted new solution. This generic approach is the basis for developing specific fingerprinting techniques for four important problems in VLSI CAD: partitioning, graph coloring, satisfiability, and standard-cell placement. We demonstrate the effectiveness of the new fingerprinting-based IPP techniques on a number of standard benchmarks.
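    To make the incremental idea concrete for the graph-coloring case (a minimal sketch of ours, not the paper's exact scheme), each buyer's copy can be derived from one seed solution by re-coloring a small, buyer-seeded set of vertices while keeping the rest of the already-optimized coloring:

    ```python
    import random

    def fingerprinted_coloring(adj, base, buyer_id, k=3):
        # Derive a buyer-specific copy of a seed coloring `base`: re-color a
        # small buyer-seeded vertex set with a buyer-specific color preference,
        # reusing the rest of the solution. The copy stays a proper coloring but
        # may use one extra color; a real scheme would also constrain quality.
        rng = random.Random(buyer_id)            # fingerprint = buyer-seeded choices
        colors = dict(base)
        n_colors = max(base.values()) + 1
        for v in rng.sample(sorted(adj), k):     # vertices perturbed for this buyer
            used = {colors[u] for u in adj[v]}   # colors on v's neighbors now
            pref = rng.sample(range(n_colors + 1), n_colors + 1)  # buyer's order
            colors[v] = next(c for c in pref if c not in used)
        return colors

    # Toy instance: a 4-cycle with a 2-color seed solution.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    base = {0: 0, 1: 1, 2: 0, 3: 1}
    print(fingerprinted_coloring(adj, base, buyer_id=42))
    ```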

    Habitat Specialization in Tropical Continental Shelf Demersal Fish Assemblages

    The implications of shallow-water impacts such as fishing and climate change on fish assemblages are generally considered in isolation from the distribution and abundance of these fish assemblages in adjacent deeper waters. We investigate the abundance and length of demersal fish assemblages across a section of tropical continental shelf at Ningaloo Reef, Western Australia, to identify fish and fish-habitat relationships across steep gradients in depth and in different benthic habitat types. The assemblage composition of demersal fish was assessed from baited remote underwater stereo-video samples (n = 304) collected from 16 depth and habitat combinations. Samples were collected across a depth range poorly represented in the literature: from the fringing reef lagoon (1–10 m depth), down the fore reef slope to the reef base (10–30 m depth), then across the adjacent continental shelf (30–110 m depth). Multivariate analyses showed that distinctive fish assemblages, and fish of different sizes, were associated with each habitat/depth category. Species richness, MaxN and diversity declined with depth, while average length and trophic level increased. The assemblage structure, diversity, size and trophic structure of demersal fishes change from shallow inshore habitats to deeper-water habitats. More habitat specialists (unique species per habitat/depth category) were associated with the reef slope and reef base than with other habitats, but offshore sponge-dominated habitats and inshore coral-dominated reef also supported unique species. This suggests that marine protected areas in shallow coral-dominated reef habitats may not adequately protect those species whose depth distribution extends beyond shallow habitats, or other significant elements of demersal fish biodiversity. The ontogenetic habitat partitioning that is characteristic of many species suggests that maintaining entire species life histories requires protecting corridors of connected habitats through which fish can migrate.